Method for Generating Synthetic Data Combining Chest Radiography Images with Tabular Clinical Information Using Dual Generative Models
Kikuchi, Tomohiro, Hanaoka, Shouhei, Nakao, Takahiro, Takenaga, Tomomi, Nomura, Yukihiro, Mori, Harushi, Yoshikawa, Takeharu
The generation of synthetic medical records using Generative Adversarial Networks (GANs) is becoming crucial for addressing privacy concerns and facilitating data sharing in the medical domain. In this paper, we introduce a novel method to create synthetic hybrid medical records that combine both image and non-image data, utilizing an auto-encoding GAN (alphaGAN) and a conditional tabular GAN (CTGAN). Our methodology encompasses three primary steps: I) Dimensional reduction of images in a private dataset (pDS) using the pretrained encoder of the alphaGAN, followed by integration with the remaining non-image clinical data to form tabular representations; II) Training the CTGAN on the encoded pDS to produce a synthetic dataset (sDS) which amalgamates encoded image features with non-image clinical data; and III) Reconstructing synthetic images from the image features using the alphaGAN's pretrained decoder. We successfully generated synthetic records incorporating both Chest X-Rays (CXRs) and thirteen non-image clinical variables (comprising seven categorical and six numeric variables). To evaluate the efficacy of the sDS, we designed classification and regression tasks and compared the performance of models trained on pDS and sDS against the pDS test set. Remarkably, by leveraging five times the volume of sDS for training, we achieved classification and regression results that were comparable, if slightly inferior, to those obtained using the native pDS. Our method holds promise for publicly releasing synthetic datasets without undermining the potential for secondary data usage.
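The three-step pipeline above can be sketched end to end with toy stand-ins. Everything below is illustrative: the linear `encode`/`decode` pair is a placeholder for the pretrained alphaGAN encoder and decoder, and `sample_synthetic` (a simple row bootstrap) is a placeholder for a trained CTGAN sampler. Only the data flow — images to latents, latents joined with tabular data, synthetic rows split and decoded back — mirrors the described method.

```python
import numpy as np

rng = np.random.default_rng(0)

n, img_dim, latent_dim = 100, 64 * 64, 16
images = rng.random((n, img_dim))
tabular = rng.random((n, 13))      # 13 non-image clinical variables

# Step I: encode each image into a low-dimensional feature vector.
# A fixed random linear map stands in for the pretrained alphaGAN encoder.
proj = rng.standard_normal((img_dim, latent_dim)) / np.sqrt(img_dim)

def encode(imgs):                  # (n, img_dim) -> (n, latent_dim)
    return imgs @ proj

def decode(latents):               # pseudo-inverse stand-in for the decoder
    return latents @ np.linalg.pinv(proj)

# Tabular representation of the private dataset (pDS): encoded image
# features side by side with the non-image clinical variables.
pds_table = np.hstack([encode(images), tabular])

# Step II: stand-in for CTGAN sampling -- bootstrap rows five times over,
# mimicking the paper's use of 5x the volume of synthetic data.
def sample_synthetic(table, k):
    idx = rng.integers(0, len(table), size=k)
    return table[idx]

sds_table = sample_synthetic(pds_table, 5 * n)

# Step III: split synthetic rows apart and decode the image features.
sds_latents = sds_table[:, :latent_dim]
sds_tabular = sds_table[:, latent_dim:]
sds_images = decode(sds_latents)
```

A real implementation would replace the bootstrap with `CTGAN.fit`/`sample` on `pds_table` and the linear maps with the trained alphaGAN networks; the shapes and joins would stay the same.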
Simulator-Based Self-Supervision for Learned 3D Tomography Reconstruction
Kosomaa, Onni, Laine, Samuli, Karras, Tero, Aittala, Miika, Lehtinen, Jaakko
We propose a deep learning method for 3D volumetric reconstruction in low-dose helical cone-beam computed tomography. Prior machine learning approaches require reference reconstructions computed by another algorithm for training. In contrast, we train our model in a fully self-supervised manner using only noisy 2D X-ray data. This is enabled by incorporating a fast differentiable CT simulator in the training loop. As we do not rely on reference reconstructions, the fidelity of our results is not limited by their potential shortcomings. We evaluate our method on real helical cone-beam projections and simulated phantoms. Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions. When applied to full-dose data, our method produces high-quality results orders of magnitude faster than iterative techniques.
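The core idea — fitting a reconstruction against noisy measurements by pushing it through a differentiable simulator, with no reference reconstruction in the loop — can be shown in miniature. This is a hand-rolled sketch, not the authors' method: a fixed random matrix `A` stands in for the differentiable CT simulator, and plain gradient descent on a single volume stands in for training a network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "CT simulator": a fixed linear operator A mapping a
# volume v to measurement data A @ v (a stand-in for helical cone-beam
# projection).
n_vox, n_meas = 32, 64
A = rng.standard_normal((n_meas, n_vox)) / np.sqrt(n_vox)

v_true = rng.random(n_vox)
y_noisy = A @ v_true + 0.05 * rng.standard_normal(n_meas)  # noisy 2D data

# Self-supervised fitting: adjust the reconstruction so that simulating it
# matches the noisy measurements; no reference reconstruction is ever used.
v = np.zeros(n_vox)
lr = 0.05
for _ in range(500):
    residual = A @ v - y_noisy
    v -= lr * (A.T @ residual)   # gradient of 0.5 * ||A v - y||^2
```

In the paper the unknown is produced by a neural network and the simulator is a fast differentiable CT forward model, but the training signal is exactly this kind of measurement-space residual.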
Unsupervised Deep Video Denoising
Sheth, Dev Yashpal, Mohan, Sreyas, Vincent, Joshua L., Manzorro, Ramon, Crozier, Peter A., Khapra, Mitesh M., Simoncelli, Eero P., Fernandez-Granda, Carlos
Deep convolutional neural networks (CNNs) currently achieve state-of-the-art performance in denoising videos. They are typically trained with supervision, minimizing the error between the network output and ground-truth clean videos. However, in many applications, such as microscopy, noiseless videos are not available. To address these cases, we build on recent advances in unsupervised still image denoising to develop an Unsupervised Deep Video Denoiser (UDVD). UDVD is shown to perform competitively with current state-of-the-art supervised methods on benchmark datasets, even when trained only on a single short noisy video sequence. Experiments on fluorescence-microscopy and electron-microscopy data illustrate the promise of our approach for imaging modalities where ground-truth clean data is generally not available. In addition, we study the mechanisms used by trained CNNs to perform video denoising. An analysis of the gradient of the network output with respect to its input reveals that these networks perform spatio-temporal filtering that is adapted to the particular spatial structures and motion of the underlying content. We interpret this as an implicit and highly effective form of motion compensation, a widely used paradigm in traditional video denoising, compression, and analysis. Code and IPython notebooks for our analysis are available at https://sreyas-mohan.github.io/udvd/ .
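The unsupervised principle behind this line of work — estimating each pixel from its spatio-temporal neighbors while excluding the pixel itself, so the estimator cannot trivially copy its own independent noise — can be demonstrated with a hand-written blind-spot filter. This is a didactic stand-in, not UDVD itself (which learns the filtering with a CNN); the static toy video and the fixed averaging window are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy video: a smooth static frame plus independent per-frame noise.
t, h, w = 5, 16, 16
clean = np.add.outer(np.arange(h), np.arange(w)) / (h + w)
video = clean[None] + 0.2 * rng.standard_normal((t, h, w))

def blind_spot_denoise(vid, frame):
    """Estimate each pixel of `vid[frame]` from its 3x3 spatio-temporal
    neighborhood across all frames, excluding the pixel itself."""
    t, h, w = vid.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            vals = []
            for dt in range(t):
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < h and 0 <= jj < w:
                            # the blind spot: never look at the pixel
                            # whose value we are predicting
                            if not (dt == frame and di == 0 and dj == 0):
                                vals.append(vid[dt, ii, jj])
            out[i, j] = np.mean(vals)
    return out

denoised = blind_spot_denoise(video, frame=2)
```

Because the noise is independent across pixels and frames, the blind-spot average cannot reproduce the noise at the predicted pixel, so minimizing error against the noisy frame drives the estimate toward the clean signal — the same reason a CNN trained this way denoises without clean targets.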
Noise2Noise: Learning Image Restoration without Clean Data
Lehtinen, Jaakko, Munkberg, Jacob, Hasselgren, Jon, Laine, Samuli, Karras, Tero, Aittala, Miika, Aila, Timo
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: under certain common circumstances, it is possible to learn to restore signals without ever observing clean ones, at performance close or equal to training using clean exemplars. We show applications in photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of MRI scans from undersampled inputs, all based on only observing corrupted data.
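The statistical reasoning the abstract refers to is that, for zero-mean noise independent of the input, the least-squares fit against noisy targets has the same expected minimizer as the fit against clean targets. A minimal numerical check (a one-parameter gain model of my own construction, not the paper's networks) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
clean = rng.random(n)
input_noisy = clean + 0.3 * rng.standard_normal(n)   # corrupted input
target_noisy = clean + 0.3 * rng.standard_normal(n)  # independently corrupted target

# A one-parameter "denoiser" f(x) = a * x, fitted by least squares.
def fit_gain(x, y):
    return (x @ y) / (x @ x)

a_clean = fit_gain(input_noisy, clean)         # supervised: clean targets
a_noisy = fit_gain(input_noisy, target_noisy)  # Noise2Noise: noisy targets
```

The cross term between the input and the target noise vanishes in expectation, so `a_noisy` converges to `a_clean` as the sample size grows — training on corrupted pairs recovers the same estimator as training on clean data, which is the paper's central claim scaled down to one parameter.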